This repository has been archived by the owner on Jun 12, 2018. It is now read-only.

Switch to new message store #1310

Open: wants to merge 14 commits into base: develop

Conversation

@jerith (Member) commented Jul 8, 2015

No description provided.

@@ -152,7 +152,9 @@ def send_scheduled_messages(self, conv):
def send_message(self, batch_id, to_addr, content, msg_options):
msg = yield self.send_to(
to_addr, content, endpoint='default', **msg_options)
yield self.vumi_api.mdb.add_outbound_message(msg, batch_ids=[batch_id])
# XXX: Do we need to do this?
Contributor:

Probably not given that we now have middleware that knows how to store messages for conversations. The batch_id parameter can likely go too. Perhaps we should open an issue for this for later?
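For readers unfamiliar with the middleware being referred to, a minimal sketch of the idea (class and method names here are illustrative stand-ins, not vumi's actual middleware API): a storing middleware records each outbound message as it passes through the transport pipeline, so application code like send_message() no longer needs to call add_outbound_message() itself.

```python
# Illustrative sketch only -- these names are hypothetical, not the real
# vumi middleware API.

class InMemoryMessageStore:
    """Toy stand-in for the message store."""
    def __init__(self):
        self.outbound = []

    def add_outbound_message(self, msg):
        self.outbound.append(msg)


class StoringMiddleware:
    """Records every outbound message, then passes it along unchanged."""
    def __init__(self, store):
        self.store = store

    def handle_outbound(self, msg):
        self.store.add_outbound_message(msg)
        return msg


store = InMemoryMessageStore()
middleware = StoringMiddleware(store)
middleware.handle_outbound({"to_addr": "+27831234567", "content": "hello"})
```

With storage happening in the middleware, the explicit add_outbound_message call in send_message (and its batch_id plumbing) becomes redundant, which is the point of the comment above.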

Contributor:

I created #1322 for this.

@hodgestar (Contributor):

Left some comments but generally looks good to me.

@JayH5 (Contributor) commented Jul 8, 2015

lgtm too

@@ -354,9 +353,10 @@ def _update_tag_data_for_acquire(self, user_account, tag):
# The batch we create here gets added to the tag_info and we can fish
# it out later. When we replace this with proper channel objects we can
# stash it there like we do with conversations and routers.
-        yield self.api.mdb.batch_start([tag], user_account=user_account.key)
+        yield self.api.get_batch_manager().batch_start(
Contributor:

Minor thing... maybe put the batch manager in a variable here since get_batch_manager() creates a new manager (although they're pretty lightweight).
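The suggestion amounts to the following pattern (a hypothetical sketch; the real vumi-go BatchManager is not shown here): call the factory once and reuse the result, rather than constructing a new manager for each use.

```python
# Hypothetical sketch of the review suggestion above. FakeApi stands in
# for the real api object; only the call-counting matters.

class FakeApi:
    def __init__(self):
        self.managers_created = 0

    def get_batch_manager(self):
        # Each call builds a fresh (if lightweight) manager object.
        self.managers_created += 1
        return object()


api = FakeApi()

# Calling the factory per use creates a new manager each time:
api.get_batch_manager()
api.get_batch_manager()

# Stashing it in a local variable creates just one and reuses it:
batch_manager = api.get_batch_manager()
```

Cheap or not, caching the manager in a variable avoids repeated construction and makes it obvious that the same manager is used throughout the method.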

@jerith (Member, Author):

Good catch, thanks.

@JayH5 (Contributor) commented Jul 9, 2015

👍 from me

@JayH5 (Contributor) commented Jul 29, 2015

👍 from me although I think someone with a bit more vumi-go knowledge needs to read over 654cf8e

@hodgestar (Contributor):

I left some comments about the message rate counting, but otherwise looks good.

@jerith (Member, Author) commented Jul 31, 2015

> How do we end up with timestamps that are less than start if we specify start as the start of the range to retrieve messages for?

Old code that I forgot to remove. Thanks for picking that up.

(I was initially attempting to match the behaviour of the previous implementation, which started counting from the most recent message rather than the current time.)

> How big is the default index page currently? Perhaps we should specify an explicit value in this method? A bit concerned that any busy campaign will do well over one page per five minutes.

The default index page is 1k results. When we were using the batch info cache, we stored up to 2k keys. The effect of having more keys in the window than we can fit in the page is the same in both cases, but the threshold is a little different.

This information should really come from metrics rather than message store lookups, but that's out of scope for this PR.
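To make the page-size concern above concrete, here is a toy model (purely illustrative; this is not the message store's actual index API): when more message keys fall inside the time window than fit in one index page, the count saturates at the page size.

```python
# Toy model of counting recent messages from a single index page.
# PAGE_SIZE mirrors the 1k default mentioned above; everything else
# is made up for illustration.
PAGE_SIZE = 1000


def count_in_window(timestamps, start, page_size=PAGE_SIZE):
    # The index returns newest-first results one page at a time;
    # here we only inspect the first page, as the method under
    # discussion effectively does.
    page = sorted(timestamps, reverse=True)[:page_size]
    return sum(1 for ts in page if ts >= start)


# 2000 messages all inside the window, but only one page is inspected,
# so the count saturates at 1000 and undercounts by half:
busy_campaign = list(range(2000))
count_in_window(busy_campaign, start=0)
```

This is the same failure mode as the old 2k-key batch info cache, just with a lower threshold, which matches the trade-off described above.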

@hodgestar (Contributor) commented Jul 31, 2015 via email

@hodgestar (Contributor):

👍 from me on what's here currently.

3 participants